Results 1 - 20 of 2,662
1.
Otol Neurotol; 45(4): 392-397, 2024 Apr 01.
Article in English | MEDLINE | ID: mdl-38478407

ABSTRACT

OBJECTIVE: To assess cochlear implant (CI) sound processor usage over time in children with single-sided deafness (SSD) and identify factors influencing device use. STUDY DESIGN: Retrospective chart review. SETTING: Pediatric tertiary referral center. PATIENTS: Children with SSD who received CI between 2014 and 2020. OUTCOME MEASURE: Primary outcome was average daily CI sound processor usage over follow-up. RESULTS: Fifteen children with SSD who underwent CI surgery were categorized based on age at diagnosis and surgery timing. Over an average follow-up of 4.3 years, patients averaged 4.6 hours/day of CI usage. Declining usage trends were noted over time, with the first 2 years postactivation showing higher rates. No significant usage differences emerged based on age, surgery timing, or hearing loss etiology. CONCLUSIONS: The long-term usage decline necessitates further research into barriers and enablers for continued CI use in pediatric SSD cases.


Subjects
Cochlear Implantation , Cochlear Implants , Deafness , Hearing Loss, Unilateral , Sound Localization , Speech Perception , Humans , Child , Cochlear Implants/adverse effects , Retrospective Studies , Hearing Loss, Unilateral/surgery , Hearing Loss, Unilateral/rehabilitation , Sound Localization/physiology , Deafness/surgery , Deafness/rehabilitation , Speech Perception/physiology , Treatment Outcome
2.
Trends Hear; 28: 23312165241229880, 2024.
Article in English | MEDLINE | ID: mdl-38545645

ABSTRACT

Bilateral cochlear implants (BiCIs) result in several benefits, including improvements in speech understanding in noise and sound source localization. However, the benefit that bilateral implants provide varies considerably across recipients. Here we consider one of the reasons for this variability: a difference in hearing function between the two ears, that is, interaural asymmetry. Thus far, investigations of interaural asymmetry have been highly specialized within various research areas. The goal of this review is to integrate these studies in one place, motivating future research in the area of interaural asymmetry. We first consider bottom-up processing, where binaural cues are represented using excitation-inhibition of signals from the left ear and right ear, varying with the location of the sound in space, and represented by the lateral superior olive in the auditory brainstem. We then consider top-down processing via predictive coding, which assumes that perception stems from expectations based on context and prior sensory experience, represented by a cascading series of cortical circuits. An internal, perceptual model is maintained and updated in light of incoming sensory input. Together, we hope that this amalgamation of physiological, behavioral, and modeling studies will help bridge gaps in the field of binaural hearing and promote a clearer understanding of the implications of interaural asymmetry for future research on optimal patient interventions.
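
A minimal sketch (not the authors' model) of the bottom-up idea described above: an LSO-like unit that is excited by the ipsilateral ear and inhibited by the contralateral ear, so its output varies sigmoidally with interaural level difference. All parameter values are illustrative assumptions.

```python
import numpy as np

def lso_rate(ipsi_db, contra_db, max_rate=100.0, slope=0.5, midpoint_db=0.0):
    # Excitation from the ipsilateral ear is opposed by inhibition from the
    # contralateral ear; the net drive (the interaural level difference, ILD)
    # is mapped to a firing rate through a sigmoid. Values are illustrative.
    ild = ipsi_db - contra_db
    return max_rate / (1.0 + np.exp(-slope * (ild - midpoint_db)))

# Rate grows as the ipsilateral (excitatory) ear becomes relatively louder.
ilds = np.arange(-20, 21, 5)
rates = [lso_rate(60 + d / 2.0, 60 - d / 2.0) for d in ilds]
```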


Subjects
Cochlear Implantation , Cochlear Implants , Sound Localization , Speech Perception , Humans , Speech Perception/physiology , Hearing , Sound Localization/physiology
3.
Neuropsychologia; 196: 108822, 2024 Apr 15.
Article in English | MEDLINE | ID: mdl-38342179

ABSTRACT

Ambient sound can mask acoustic signals. The current study addressed how echolocation in people is affected by masking sound, and the role played by type of sound and spatial (i.e. binaural) similarity. We also investigated the role played by blindness and long-term experience with echolocation, by testing echolocation experts, as well as blind and sighted people new to echolocation. Results were obtained in two echolocation tasks where participants listened to binaural recordings of echolocation and masking sounds, and either localized echoes in azimuth or discriminated echo audibility. Echolocation and masking sounds could be either clicks or broadband noise. An adaptive staircase method was used to adjust signal-to-noise ratios (SNRs) based on participants' responses. When target and masker had the same binaural cues (i.e. both were monaural sounds), people performed better (i.e. had lower SNRs) when target and masker used different types of sound (e.g. clicks in a noise masker or noise in a click masker), as compared to when target and masker used the same type of sound (e.g. clicks in a click masker, or noise in a noise masker). A very different pattern of results was observed when masker and target differed in their binaural cues, in which case people always performed better when clicks were the masker, regardless of the type of emission used. Further, direct comparison between conditions with and without binaural difference revealed binaural release from masking only when clicks were used as emissions and masker, but not otherwise (i.e. when noise was used as masker or emission). This suggests that echolocation with clicks or noise may differ in their sensitivity to binaural cues. We observed the same pattern of results for echolocation experts, and for blind and sighted people new to echolocation, suggesting a limited role played by long-term experience or blindness. In addition to generating novel predictions for future work, the findings also inform instruction in echolocation for people who are blind or sighted.
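
The abstract mentions an adaptive staircase that adjusted SNR from participants' responses but does not give the rule; below is a minimal sketch of one common variant (2-down/1-up with a fixed step), with the rule, step size, and trial count chosen for illustration only.

```python
import math
import random

def run_staircase(p_correct_at, start_snr_db=10.0, step_db=2.0, n_trials=60):
    # Minimal 2-down/1-up staircase over SNR (dB). `p_correct_at(snr)` stands
    # in for a listener and returns the probability of a correct response.
    snr, n_correct_in_row, track = start_snr_db, 0, []
    for _ in range(n_trials):
        correct = random.random() < p_correct_at(snr)
        track.append((snr, correct))
        if correct:
            n_correct_in_row += 1
            if n_correct_in_row == 2:   # two correct in a row -> make it harder
                snr -= step_db
                n_correct_in_row = 0
        else:                           # one error -> make it easier
            snr += step_db
            n_correct_in_row = 0
    return track

# Example: a simulated listener whose accuracy improves smoothly with SNR.
trials = run_staircase(lambda snr: 1.0 / (1.0 + math.exp(-0.5 * snr)))
```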


Subjects
Sound Localization , Animals , Humans , Sound Localization/physiology , Blindness , Noise , Acoustics , Cues , Perceptual Masking , Acoustic Stimulation/methods
4.
Trends Hear; 28: 23312165241230947, 2024.
Article in English | MEDLINE | ID: mdl-38361245

ABSTRACT

Sound localization is an important ability in everyday life. This study investigates the influence of vision and presentation mode on auditory spatial bisection performance. Subjects were asked to identify the smaller perceived distance between three consecutive stimuli that were either presented via loudspeakers (free field) or via headphones after convolution with generic head-related impulse responses (binaural reproduction). Thirteen azimuthal sound incidence angles on a circular arc segment of ±24° at a radius of 3 m were included in three regions of space (front, rear, and laterally left). Twenty normally sighted (measured both sighted and blindfolded) and eight blind persons participated. Results showed no significant differences with respect to visual condition, but strong effects of sound direction and presentation mode. Psychometric functions were steepest in frontal space and indicated median spatial bisection thresholds of 11°-14°. Thresholds increased significantly in rear (11°-17°) and laterally left (20°-28°) space in free field. Individual pinna and torso cues, as available only in free field presentation, improved the performance of all participants compared to binaural reproduction. Especially in rear space, auditory spatial bisection thresholds were three to four times higher (i.e., poorer) using binaural reproduction than in free field. The results underline the importance of individual auditory spatial cues for spatial bisection, irrespective of access to vision, which indicates that vision may not be strictly necessary to calibrate allocentric spatial hearing.
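
The thresholds above come from psychometric functions fitted to bisection responses; the exact fitting procedure is not stated in the abstract, so the sketch below shows one common approach (a cumulative-Gaussian fit, with made-up data and an assumed threshold convention).

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

def psychometric(x, mu, sigma):
    # Cumulative-Gaussian psychometric function.
    return norm.cdf(x, loc=mu, scale=sigma)

# Hypothetical bisection data: angular offsets (deg) and proportion of one
# response alternative. These values are illustrative, not the study's data.
angles = np.array([-24, -16, -8, 0, 8, 16, 24], dtype=float)
p_resp = np.array([0.05, 0.15, 0.35, 0.50, 0.70, 0.90, 0.97])

(mu, sigma), _ = curve_fit(psychometric, angles, p_resp, p0=[0.0, 10.0],
                           bounds=([-30.0, 0.1], [30.0, 60.0]))
# One common threshold convention: the 50%-to-75% spread of the fitted curve.
threshold_deg = norm.ppf(0.75, loc=mu, scale=sigma) - mu
```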


Subjects
Sound Localization , Visually Impaired Persons , Humans , Space Perception/physiology , Blindness/diagnosis , Sound Localization/physiology , Acoustics
5.
Sci Rep; 14(1): 2469, 2024 01 30.
Article in English | MEDLINE | ID: mdl-38291126

ABSTRACT

Sound localization is essential to perceive the surrounding world and to interact with objects. This ability can be learned across time, and multisensory and motor cues play a crucial role in the learning process. A recent study demonstrated that when training localization skills, reaching to the sound source to determine its position reduced localization errors faster and to a greater extent than just naming the sources' positions, even though in both tasks participants received the same feedback about the correct position of the sound source in case of a wrong response. However, it remains to be established which features made reaching to sounds more effective than naming. In the present study, we introduced a further condition in which the hand is the effector providing the response, but without reaching toward the space occupied by the target source: the pointing condition. We tested three groups of participants (naming, pointing, and reaching groups), each performing a sound localization task in normal and altered listening situations (i.e. mild-moderate unilateral hearing loss) simulated through auditory virtual reality technology. The experiment comprised four blocks: during the first and the last block, participants were tested in the normal listening condition, while during the second and the third they were tested in the altered listening condition. We measured their performance, their subjective judgments (e.g. effort), and their head-related behavior (through kinematic tracking). First, performance decreased when participants were exposed to asymmetrical mild-moderate hearing impairment, more specifically on the ipsilateral side and for the pointing group. Second, we documented that all groups decreased their localization errors across the altered listening blocks, but the extent of this reduction was greater for the reaching and pointing groups than for the naming group. Crucially, the reaching group showed a greater error reduction on the side where the listening alteration was applied. Furthermore, we documented that, across blocks, the reaching and pointing groups increased their head motor behavior during the task (i.e., they increased approaching head movements toward the space of the sound) more than the naming group. Third, while performance in the unaltered blocks (first and last) was comparable, only the reaching group continued to exhibit head behavior similar to that developed during the altered blocks (second and third), corroborating the previously observed relationship between reaching to sounds and head movements. In conclusion, this study further demonstrates the effectiveness of reaching to sounds, as compared to pointing and naming, in the learning process. This effect could be related both to the process of implementing goal-directed motor actions and to the role of reaching actions in fostering the implementation of head-related motor strategies.


Subjects
Hearing Loss , Sound Localization , Virtual Reality , Humans , Hearing/physiology , Sound Localization/physiology , Hearing Tests
6.
Article in English | MEDLINE | ID: mdl-38227005

ABSTRACT

The Journal of Comparative Physiology lived up to its name in the last 100 years by including more than 1500 different taxa in almost 10,000 publications. Seventeen phyla of the animal kingdom were represented. The honeybee (Apis mellifera) is the taxon with the most publications, followed by locust (Locusta migratoria), crayfishes (Cambarus spp.), and fruit fly (Drosophila melanogaster). The representation of species in this journal in the past thus differs much from the 13 model systems named by the National Institutes of Health (USA). We mention major accomplishments of research on species with specific adaptations, specialist animals, for example, the quantitative description of the processes underlying the action potential in squid (Loligo forbesii) and the isolation of the first receptor channel in the electric eel (Electrophorus electricus) and electric ray (Torpedo spp.). Future neuroethological work should make the recent genetic and technological developments available for specialist animals. There are many research questions left that may be answered with high yield in specialists and some questions that can only be answered in specialists. Moreover, the adaptations of animals that occupy specific ecological niches often lend themselves to biomimetic applications. We go into some depth in explaining our thoughts on research into motion vision in insects, sound localization in barn owls, and electroreception in weakly electric fish.


Subjects
Electric Fish , Sound Localization , Strigiformes , Animals , Drosophila melanogaster , Sound Localization/physiology , Vision, Ocular , Electrophorus
7.
Hear Res; 441: 108922, 2024 Jan.
Article in English | MEDLINE | ID: mdl-38043403

ABSTRACT

The purpose of our study was to estimate the time interval required for integrating the acoustical changes related to sound motion using both psychophysical and EEG measures. Healthy listeners performed direction identification tasks under dichotic conditions in the delayed-motion paradigm. The minimum audible movement angle (MAMA) was measured over the range of velocities from 60 to 360 deg/s. We also measured the minimal duration of motion at which the listeners could identify its direction. EEG was recorded in the same group of subjects during passive listening. Motion onset responses (MOR) were analyzed. MAMA increased linearly with motion velocity. The minimum audible angle (MAA) calculated from this linear function was about 2 deg. For higher velocities of the delayed motion, we found 2- to 3-fold better spatial resolution than that previously reported for motion starting at the sound onset. The time required for optimal discrimination of motion direction was about 34 ms. The main finding of our study was that both the direction identification time obtained in the behavioral task and the cN1 latency behaved like hyperbolic functions of the sound's velocity. Direction identification time decreased asymptotically to 8 ms, which was considered the minimal integration time for instantaneous shift detection. The peak latency of cN1 also decreased with increasing velocity and asymptotically approached 137 ms. This limit corresponded to the latency of the response to an instantaneous sound shift and was 37 ms later than the latency of the sound-onset response. The direction discrimination time (34 ms) was of the same magnitude as the additional time required for motion processing to be reflected in the MOR potential. Thus, MOR latency can be viewed as a neurophysiological index of temporal integration. Based on these findings, we may assume that no measurable MOR would be evoked by slowly moving stimuli, as they would reach their MAMAs in a time longer than the optimal integration time.
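
A sketch of the two functional forms the abstract describes: MAMA rising linearly with velocity (intercept estimating the MAA) and latency or identification time falling hyperbolically toward an asymptote. The synthetic data and starting values below are illustrative, not the study's measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

def mama_linear(v_deg_s, maa_deg, slope):
    # MAMA grows linearly with velocity; the intercept estimates the MAA (~2 deg).
    return maa_deg + slope * v_deg_s

def latency_hyperbolic(v_deg_s, asymptote_ms, c):
    # Latency (or identification time) decreases hyperbolically with velocity
    # toward an asymptote (e.g., ~137 ms for cN1, ~8 ms for identification time).
    return asymptote_ms + c / v_deg_s

v = np.array([60, 120, 180, 240, 300, 360], dtype=float)   # deg/s
rng = np.random.default_rng(0)
fake_latency = latency_hyperbolic(v, 137.0, 3000.0) + rng.normal(0.0, 2.0, v.size)
(asymptote_ms, c), _ = curve_fit(latency_hyperbolic, v, fake_latency,
                                 p0=[120.0, 1000.0])
```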


Subjects
Motion Perception , Sound Localization , Humans , Sound Localization/physiology , Sound , Reaction Time/physiology , Motion , Movement , Motion Perception/physiology
8.
J Neurosci; 44(1), 2024 Jan 03.
Article in English | MEDLINE | ID: mdl-37989591

ABSTRACT

Interaural time differences (ITDs) are a major cue for sound localization and change with increasing head size. Since the barn owl's head width more than doubles in the month after hatching, we hypothesized that the development of their ITD detection circuit might be modified by experience. To test this, we raised owls with unilateral ear inserts that delayed and attenuated the acoustic signal, and then measured the ITD representation in the brainstem nucleus laminaris (NL) when they were adults. The ITD circuit is composed of delay line inputs to coincidence detectors, and we predicted that plastic changes would lead to shorter delays in the axons from the manipulated ear, and complementary shifts in ITD representation on the two sides. In owls that received ear inserts starting around P14, the maps of ITD shifted in the predicted direction, but only on the ipsilateral side, and only in those tonotopic regions that had not experienced auditory stimulation prior to insertion. The contralateral map did not change. Thus, experience-dependent plasticity of the ITD circuit occurs in NL, and our data suggest that ipsilateral and contralateral delays are independently regulated. As a result, altered auditory input during development leads to long-lasting changes in the representation of ITD. SIGNIFICANCE STATEMENT: The early life of barn owls is marked by increasing sensitivity to sound, and by increasing ITDs. Their prolonged post-hatch development allowed us to examine the role of altered auditory experience in the development of ITD detection circuits. We raised owls with a unilateral ear insert and found that their maps of ITD were altered by experience, but only in those tonotopic regions ipsilateral to the occluded ear that had not experienced auditory stimulation prior to insertion. This experience-induced plasticity allows the sound localization circuits to be customized to individual characteristics, such as the size of the head, and potentially to compensate for imbalanced hearing sensitivities between the left and right ears.


Subjects
Sound Localization , Strigiformes , Animals , Sound Localization/physiology , Hearing , Brain Stem/physiology , Acoustic Stimulation , Auditory Pathways/physiology
9.
Laryngoscope; 134(2): 919-925, 2024 Feb.
Article in English | MEDLINE | ID: mdl-37466238

ABSTRACT

OBJECTIVE: To assess the perceived benefit of cochlear implant (CI) use for children with unilateral hearing loss (UHL) and evaluate whether perceived abilities are associated with performance on measures of speech recognition and spatial hearing. METHOD: Nineteen children with moderate-to-profound UHL underwent cochlear implantation. The Speech, Spatial and Qualities of Hearing Questionnaire modified for children (SSQ-C) was completed by parental proxy pre-operatively and at 3, 6, 9, 12, 18, and 24 months post-activation. Linear mixed models evaluated perceived benefits over the study period. Pearson correlations assessed the association between subjective report and performance on measures of word recognition with the CI alone and spatial hearing (speech recognition in spatially separated noise and sound source localization) in the combined condition (CI plus contralateral ear). RESULTS: For the SSQ-C, parents reported significant improvements with CI use as compared to pre-operative perceptions (p < 0.001); improved perceptions were either maintained or continued to improve over the 2-year post-activation period. Perceived benefit did not significantly correlate with word recognition with the CI alone or spatial hearing outcomes in the combined condition. CONCLUSION: Families of children with UHL observed benefits of CI use early after cochlear implantation that were maintained with long-term device use. Responses to subjective measures may broaden our understanding of the experiences of pediatric CI users with UHL in addition to outcomes on typical measures of CI performance. LEVEL OF EVIDENCE: 3 Laryngoscope, 134:919-925, 2024.
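
The longitudinal SSQ-C analysis above used linear mixed models; a minimal sketch of that kind of model (random intercept per child, fixed effect of time since activation) is shown below with made-up data and assumed column names, not the study's dataset.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format data: one parent-reported SSQ-C score per child per
# post-activation interval. Column names and values are assumptions.
df = pd.DataFrame({
    "child":  ["a", "a", "a", "b", "b", "b", "c", "c", "c",
               "d", "d", "d", "e", "e", "e"],
    "months": [0, 12, 24] * 5,
    "ssq_c":  [3.1, 6.0, 6.4, 2.5, 5.2, 5.8, 4.0, 6.8, 7.1,
               3.6, 5.9, 6.2, 2.9, 5.5, 6.0],
})

# Random intercept per child; fixed effect of months since activation.
fit = smf.mixedlm("ssq_c ~ months", df, groups=df["child"]).fit()
print(fit.summary())
```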


Subjects
Cochlear Implantation , Cochlear Implants , Hearing Loss, Unilateral , Sound Localization , Speech Perception , Humans , Child , Hearing Loss, Unilateral/surgery , Hearing Loss, Unilateral/rehabilitation , Speech Perception/physiology , Hearing , Sound Localization/physiology , Treatment Outcome
10.
Exp Brain Res; 242(1): 241-255, 2024 Jan.
Article in English | MEDLINE | ID: mdl-38006421

ABSTRACT

Previous studies have identified a 'defensive graded field' in the peripersonal front space where potentially threatening stimuli induce stronger blink responses, mainly modulated by top-down mechanisms, which include various factors, such as proximity to the body, stimulus valence, and social cues. However, very little is known about the mechanisms responsible for the representation of the back space and the possible role of bottom-up information. By means of acoustic stimuli, we evaluated individuals' representation of front and back space in an ambiguous environment that offered some degree of uncertainty in terms of both distance (close vs. far) and front-back egocentric location of sound sources. We analyzed verbal responses about the localization of sound sources together with EMG data on the blink reflex. Results suggested that stimulus distance evaluations were better explained by subjective front-back discrimination than by real position. Moreover, blink response data were also better explained by subjective front-back discrimination. Taken together, these findings suggest that the mechanisms that dictate blink response magnitude might also affect sound localization (a possible bottom-up mechanism), probably interacting with top-down mechanisms that modulate stimulus location and distance. These findings are interpreted within the defensive peripersonal framework, suggesting a close relationship between bottom-up and top-down mechanisms in spatial representation.


Subjects
Personal Space , Sound Localization , Humans , Blinking , Sound Localization/physiology , Cues
11.
Front Neural Circuits; 17: 1307283, 2023.
Article in English | MEDLINE | ID: mdl-38107610

ABSTRACT

Auditory brainstem neurons in the lateral superior olive (LSO) receive excitatory input from the ipsilateral cochlear nucleus (CN) and inhibitory transmission from the contralateral CN via the medial nucleus of the trapezoid body (MNTB). This circuit enables sound localization using interaural level differences. Early studies have observed an additional inhibitory input originating from the ipsilateral side. However, many of its details, such as its origin, remained elusive. Employing electrical and optical stimulation of afferents in acute mouse brainstem slices and anatomical tracing, we here describe a glycinergic projection to LSO principal neurons that originates from the ipsilateral CN. This inhibitory synaptic input likely mediates inhibitory sidebands of LSO neurons in response to acoustic stimulation.


Subjects
Cochlear Nucleus , Sound Localization , Superior Olivary Complex , Animals , Mice , Superior Olivary Complex/physiology , Cochlear Nucleus/physiology , Olivary Nucleus/physiology , Sound Localization/physiology , Neurons/physiology , Auditory Pathways/physiology
12.
J Acoust Soc Am; 154(4): 2191-2202, 2023 10 01.
Article in English | MEDLINE | ID: mdl-37815410

ABSTRACT

Psychophysical experiments explored how the repeated presentation of a context, consisting of an adaptor and a target, induces plasticity in the localization of an identical target presented alone on interleaved trials. The plasticity, and its time course, was examined both in a classroom and in an anechoic chamber. Adaptors and targets were 2 ms noise clicks and listeners were tasked with localizing the targets while ignoring the adaptors (when present). The context was either simple, consisting of a single-click adaptor and a target, or complex, containing either a single-click or an eight-click adaptor that varied from trial to trial. The adaptor was presented either from a frontal or a lateral location, fixed within a run. The presence of context caused responses to the isolated targets to be displaced up to 14° away from the adaptor location. This effect was stronger and slower if the context was complex, growing over the 5 min duration of the runs. Additionally, the simple context buildup had a slower onset in the classroom. Overall, the results illustrate that sound localization is subject to slow adaptive processes that depend on the spatial and temporal structure of the context and on the level of reverberation in the environment.


Subjects
Sound Localization , Sound Localization/physiology , Noise/adverse effects , Time Factors
13.
J Neurophysiol; 130(4): 953-966, 2023 10 01.
Article in English | MEDLINE | ID: mdl-37701942

ABSTRACT

The auditory system of female crickets allows them to specifically recognize and approach the species-specific male calling song, defined by sound pulses and silent intervals. Auditory brain neurons form a delay-line and coincidence detector network tuned to the pulse period of the male song. We analyzed the impact of changes in pulse duration on the behavior and the responses of the auditory neurons and the network. We confirm that the ascending neuron AN1 and the local neuron LN2 copy the temporal structure of the song. During ongoing long sound pulses, the delay-line neuron LN5 shows additional rebound responses and the coincidence detector neuron LN3 can generate additional bursts of activity, indicating that these may be driven by intrinsic oscillations of the network. Moreover, the response of the feature detector neuron LN4 is shaped by a combination of inhibitory and excitatory synaptic inputs, and LN4 responds even to long sound pulses with a short depolarization and burst of spikes, as it does to a sound pulse of natural duration. This response property of LN4 indicates a selective auditory pulse duration filter mechanism of the pattern recognition network, which is tuned to the duration of natural pulses. Comparing the tuning of the phonotactic behavior with the tuning of the local auditory brain neurons to the same test patterns, we find no evidence that modulation of the phonotactic behavior is reflected at the level of the feature detector neurons. This rather suggests that steering to nonattractive pulse patterns is organized at the thoracic level. NEW & NOTEWORTHY: Pulse period selectivity has been reported for the cricket delay-line and coincidence detector network, whereas pulse duration selectivity is evident from behavioral tests. Pulses of increasing duration elicit responses in the pattern recognition neurons that do not parallel the behavioral responses and indicate additional processing mechanisms. Long sound pulses elicit rhythmic rebound activity and additional bursts, whereas the feature detector neuron reveals a pulse duration filter, expanding our understanding of the pattern recognition process.


Subjects
Gryllidae , Sound Localization , Animals , Female , Male , Auditory Perception/physiology , Gryllidae/physiology , Brain/physiology , Sound , Neurons/physiology , Acoustic Stimulation , Sound Localization/physiology
14.
Trends Hear; 27: 23312165231201020, 2023.
Article in English | MEDLINE | ID: mdl-37715636

ABSTRACT

The ventriloquism aftereffect (VAE), observed as a shift in the perceived locations of sounds after audio-visual stimulation, requires reference frame (RF) alignment since hearing and vision encode space in different RFs (head-centered vs. eye-centered). Previous experimental studies reported inconsistent results, observing either a mixture of head-centered and eye-centered frames, or a predominantly head-centered frame. Here, a computational model is introduced, examining the neural mechanisms underlying these effects. The basic model version assumes that the auditory spatial map is head-centered and the visual signals are converted to head-centered frame prior to inducing the adaptation. Two mechanisms are considered as extended model versions to describe the mixed-frame experimental data: (1) additional presence of visual signals in eye-centered frame and (2) eye-gaze direction-dependent attenuation in VAE when eyes shift away from the training fixation. Simulation results show that the mixed-frame results are mainly due to the second mechanism, suggesting that the RF of VAE is mainly head-centered. Additionally, a mechanism is proposed to explain a new ventriloquism-aftereffect-like phenomenon in which adaptation is induced by aligned audio-visual signals when saccades are used for responding to auditory targets. A version of the model extended to consider such response-method-related biases accurately predicts the new phenomenon. When attempting to model all the experimentally observed phenomena simultaneously, the model predictions are qualitatively similar but less accurate, suggesting that the proposed neural mechanisms interact in a more complex way than assumed in the model.


Subjects
Sound Localization , Humans , Sound Localization/physiology , Acoustic Stimulation/methods , Saccades , Sound , Photic Stimulation/methods , Visual Perception/physiology
15.
J Acoust Soc Am; 154(2): 661-670, 2023 08 01.
Article in English | MEDLINE | ID: mdl-37540095

ABSTRACT

Front-back reversals (FBRs) in sound-source localization tasks due to cone-of-confusion errors on the azimuth plane occur with some regularity, and their occurrence is listener-dependent. There are fewer FBRs for wideband, high-frequency sounds than for low-frequency sounds presumably because the sources of low-frequency sounds are localized on the basis of interaural differences (interaural time and level differences), which can lead to ambiguous responses. Spectral cues can aid in determining sound-source locations for wideband, high-frequency sounds, and such spectral cues do not lead to ambiguous responses. However, to what extent spectral features might aid sound-source localization is still not known. This paper explores conditions in which the spectral profile of two-octave wide noise bands, whose sources were localized on the azimuth plane, were randomly varied. The experiment demonstrated that such spectral profile randomization increased FBRs for high-frequency noise bands, presumably because whatever spectral features are used for sound-source localization were no longer as useful for resolving FBRs, and listeners relied on interaural differences for sound-source localization, which led to response ambiguities. Additionally, head rotation decreased FBRs in all cases, even when FBRs increased due to spectral profile randomization. In all cases, the occurrence of FBRs was listener-dependent.
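
A sketch of how a two-octave noise band with a randomized spectral profile might be synthesized (independent random level per sub-band, random component phases). The band edges, number of sub-bands, and jitter range are illustrative guesses, not the stimulus parameters used in the study.

```python
import numpy as np

def random_profile_noise(fs=44100, dur_s=0.25, f_lo_hz=4000.0,
                         n_subbands=6, level_jitter_db=10.0, rng=None):
    # Two-octave noise band (f_lo to 4*f_lo) whose spectral profile is
    # randomized: each sub-band gets an independent random level and random
    # component phases. All parameter values are illustrative assumptions.
    rng = rng or np.random.default_rng()
    n = int(fs * dur_s)
    freqs = np.fft.rfftfreq(n, 1.0 / fs)
    spec = np.zeros(freqs.size, dtype=complex)
    edges = f_lo_hz * 2.0 ** np.linspace(0.0, 2.0, n_subbands + 1)
    for lo, hi in zip(edges[:-1], edges[1:]):
        idx = (freqs >= lo) & (freqs < hi)
        gain = 10.0 ** (rng.uniform(-level_jitter_db, level_jitter_db) / 20.0)
        spec[idx] = gain * np.exp(1j * rng.uniform(0.0, 2.0 * np.pi, idx.sum()))
    x = np.fft.irfft(spec, n)
    return x / np.max(np.abs(x))    # normalize to +/-1
```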


Subjects
Cues , Sound Localization , Acoustic Stimulation , Noise/adverse effects , Sound , Sound Localization/physiology , Humans
16.
Hear Res; 437: 108839, 2023 09 15.
Article in English | MEDLINE | ID: mdl-37429100

ABSTRACT

The binaural interaction component (BIC) of the auditory brainstem response (ABR) is the difference obtained after subtracting the sum of right and left ear ABRs from binaurally evoked ABRs. The BIC has attracted interest as a biomarker of binaural processing abilities. Best binaural processing is presumed to require spectrally-matched inputs at the two ears, but peripheral pathology and/or impacts of hearing devices can lead to mismatched inputs. Such mismatching can degrade behavioral sensitivity to interaural time difference (ITD) cues, but might be detected using the BIC. Here, we examine the effect of interaural frequency mismatch (IFM) on BIC and behavioral ITD sensitivity in audiometrically normal adult human subjects (both sexes). Binaural and monaural ABRs were recorded and BICs computed from subjects in response to narrowband tones. Left ear stimuli were fixed at 4000 Hz while right ear stimuli varied over a ∼2-octave range (re: 4000 Hz). Separately, subjects performed psychophysical lateralization tasks using the same stimuli to determine ITD discrimination thresholds jointly as a function of IFM and sound level. Results demonstrated significant effects of IFM on BIC amplitudes, with lower amplitudes in mismatched conditions than frequency-matched. Behavioral ITD discrimination thresholds were elevated at mismatched frequencies and lower sound levels, but also more sharply modulated by IFM at lower sound levels. Combinations of ITD, IFM and overall sound level that resulted in fused and lateralized percepts were bound by the empirically-measured BIC, and also by model predictions simulated using an established computational model of the brainstem circuit thought to generate the BIC.
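
The BIC definition in the first sentence is a simple sample-by-sample subtraction; a minimal sketch of that computation follows, with an assumed amplitude summary noted in a comment.

```python
import numpy as np

def binaural_interaction_component(abr_binaural, abr_left, abr_right):
    # BIC as defined above: the binaurally evoked ABR minus the sum of the
    # monaural left- and right-ear ABRs, computed sample by sample on
    # equal-length averaged waveforms (e.g., in microvolts).
    abr_binaural, abr_left, abr_right = (np.asarray(w, dtype=float)
                                         for w in (abr_binaural, abr_left, abr_right))
    return abr_binaural - (abr_left + abr_right)

def bic_amplitude(bic_waveform):
    # One assumed summary: the depth of the largest negative deflection
    # (DN1-like trough) within the analysis window supplied by the caller.
    return float(np.min(np.asarray(bic_waveform, dtype=float)))
```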


Subjects
Evoked Potentials, Auditory, Brain Stem , Sound Localization , Male , Adult , Female , Humans , Acoustic Stimulation/methods , Evoked Potentials, Auditory, Brain Stem/physiology , Brain Stem/physiology , Electroencephalography , Sound Localization/physiology
17.
Neuropsychologia; 188: 108629, 2023 09 09.
Article in English | MEDLINE | ID: mdl-37356539

ABSTRACT

Recent studies show that the classical model based on axonal delay lines may not explain interaural time difference (ITD) based spatial coding in humans. Instead, a population-code model called the "opponent channels model" (OCM) has been suggested. This model comprises two competing channels, one for each auditory hemifield, each with a sigmoidal tuning curve. Event-related potentials (ERPs) to ITD changes are used in some studies to test the predictions of this model by considering the sounds before and after the change as adaptor and probe stimuli, respectively. It is assumed in these studies that the former stimulus causes adaptation of the neurons selective to its side, and that the ERP N1-P2 response to the ITD change is the specific response of the neurons with selectivity to the side of the probe sound. However, these ERP components are known as a global, non-specific acoustic change complex of cortical origin evoked by any change in the auditory environment. It probably does not genuinely reflect the activity of some stimulus-specific neuronal units that have escaped the refractory effect of the preceding adaptor, which means a violation of the crucial assumption in an adaptor-probe paradigm. To assess this viewpoint, we conducted two experiments. In the first one, we recorded ERPs to abrupt lateralization shifts of click trains having various pre- and post-shift ITDs within the physiological range of -600 µs to +600 µs. The magnitudes of the ERP components P1, N1, and P2 to these ITD shifts did not comply with the additive behavior of partial probe responses presumed for an adaptor-probe paradigm, casting doubt on the accuracy of testing sensory coding models with ERPs to abrupt lateralization changes. The findings of the second experiment, involving ERPs to conjoint outward/transverse shift stimuli, also supported this conclusion.
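
A toy illustration (not the ERP analysis itself) of the opponent channels model referenced above: two hemifield channels with mirror-image sigmoidal ITD tuning whose difference gives a laterality readout. Slopes and midpoints are illustrative, not fitted to physiology.

```python
import numpy as np

def channel(itd_us, midpoint_us, slope, sign):
    # Sigmoidal hemifield channel; `sign` selects left- vs right-preferring.
    return 1.0 / (1.0 + np.exp(-sign * slope * (itd_us - midpoint_us)))

def opponent_readout(itd_us, slope=0.01, midpoint_us=0.0):
    # Perceived laterality taken as the difference between the right- and
    # left-preferring channel outputs.
    right = channel(itd_us, midpoint_us, slope, +1.0)
    left = channel(itd_us, midpoint_us, slope, -1.0)
    return right - left

itds = np.arange(-600, 601, 100)   # physiological ITD range used above (microseconds)
laterality = opponent_readout(itds)
```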


Subjects
Auditory Cortex , Sound Localization , Humans , Evoked Potentials, Auditory/physiology , Acoustic Stimulation , Sound Localization/physiology , Electroencephalography , Auditory Cortex/physiology
18.
J Comp Neurol; 531(18): 1893-1896, 2023 Dec.
Article in English | MEDLINE | ID: mdl-37357573

ABSTRACT

Pandya made many important contributions to the understanding of the anatomy of the cortical auditory pathways beginning with his publication in 1969. This review focuses on the observation in that article on the transcallosal connections of the primary auditory cortex. The medial part of the cortex has such connections, but the lateral part does not. Pandya and colleagues speculated that this might have something to do with spatial localization of sound. Review of the subsequent literature shows that the primary auditory cortex anatomy is complex, but the original observation is likely correct. However, the physiological speculation was not.


Subjects
Auditory Cortex , Sound Localization , Sound Localization/physiology , Auditory Cortex/anatomy & histology , Acoustic Stimulation , Auditory Pathways/physiology , Brain Mapping
19.
J Neurosci; 43(22): 4093-4109, 2023 05 31.
Article in English | MEDLINE | ID: mdl-37130779

ABSTRACT

The medial superior olive (MSO) is a binaural nucleus that is specialized in detecting the relative arrival times of sounds at both ears. Excitatory inputs to its neurons originating from either ear are segregated to different dendrites. To study the integration of synaptic inputs both within and between dendrites, we made juxtacellular and whole-cell recordings from the MSO in anesthetized female gerbils, while presenting a "double zwuis" stimulus, in which each ear received its own set of tones, chosen in such a way that all second-order distortion products (DP2s) could be uniquely identified. MSO neurons phase-locked to multiple tones within the multitone stimulus, and vector strength, a measure of spike phase-locking, generally depended linearly on the size of the average subthreshold response to a tone. Subthreshold responses to tones in one ear depended little on the presence of sound in the other ear, suggesting that inputs from different ears sum linearly without a substantial role for somatic inhibition. The "double zwuis" stimulus also evoked response components in the MSO neurons that were phase-locked to DP2s. Bidendritic subthreshold DP2s were quite rare compared with bidendritic suprathreshold DP2s. We observed that in a small subset of cells, the ability to trigger spikes differed substantially between the two ears, which might be explained by a dendritic axonal origin. Some neurons that were driven monaurally by only one of the two ears nevertheless showed decent binaural tuning. We conclude that MSO neurons are remarkably good at finding binaural coincidences even among uncorrelated inputs. SIGNIFICANCE STATEMENT: Neurons in the medial superior olive are essential for precisely localizing low-frequency sounds in the horizontal plane. From their soma, only two dendrites emerge, which are innervated by inputs originating from different ears. Using a new sound stimulus, we studied the integration of inputs both within and between these dendrites in unprecedented detail. We found evidence that inputs from different dendrites add linearly at the soma, but that small increases in somatic potentials could lead to large increases in the probability of generating a spike. This basic scheme allowed the MSO neurons to detect the relative arrival times of inputs at both dendrites remarkably efficiently, although the relative size of these inputs could differ considerably.
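
Vector strength, the phase-locking measure used above, has a standard definition: the length of the mean resultant vector of spike phases relative to the stimulus cycle. A short sketch with hypothetical spike times follows; the example frequency is an assumption, not one of the study's tone frequencies.

```python
import numpy as np

def vector_strength(spike_times_s, freq_hz):
    # Vector strength of phase-locking to a component of frequency `freq_hz`:
    # 0 means no phase-locking, 1 means perfect phase-locking.
    spike_times_s = np.asarray(spike_times_s, dtype=float)
    phases = 2.0 * np.pi * freq_hz * spike_times_s
    return float(np.abs(np.mean(np.exp(1j * phases))))

# Hypothetical spike times (s) evaluated against a 500 Hz component.
vs = vector_strength([0.0102, 0.0123, 0.0141, 0.0302, 0.0321], 500.0)
```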


Subjects
Sound Localization , Superior Olivary Complex , Animals , Female , Superior Olivary Complex/physiology , Gerbillinae , Neurons/physiology , Acoustic Stimulation , Sound Localization/physiology , Olivary Nucleus/physiology , Auditory Pathways/physiology
20.
Biol Cybern; 117(1-2): 143-162, 2023 04.
Article in English | MEDLINE | ID: mdl-37129628

ABSTRACT

A principal cue for sound source localization is the difference in arrival times of sounds at an animal's two ears (interaural time difference, ITD). Neurons that process ITDs are specialized to compare the timing of inputs with submillisecond precision. In the barn owl, ITD processing begins in the nucleus laminaris (NL) region of the auditory brain stem. Remarkably, NL neurons are sensitive to ITDs in high-frequency sounds (kilohertz-range). This contrasts with ITD-based sound localization in analogous regions in mammals where ITD sensitivity is typically restricted to lower-frequency sounds. Guided by previous experiments and modeling studies of tone-evoked responses of NL neurons, we propose NL neurons achieve high-frequency ITD sensitivity if they respond selectively to the small-amplitude, high-frequency oscillations in their inputs, and remain relatively non-responsive to mean input level. We use a biophysically based model to study the effects of soma-axon coupling on dynamics and function in NL neurons. First, we show that electrical separation of the soma from the axon region in the neuron enhances high-frequency ITD sensitivity. This soma-axon coupling configuration promotes linear subthreshold dynamics and rapid spike initiation, making the model more responsive to input oscillations, rather than mean input level. Second, we provide new evidence for the essential role of phasic dynamics for high-frequency neural coincidence detection. Transforming our model to the phasic firing mode further tunes the model to respond selectively to the oscillating inputs that carry ITD information. Similar structural and dynamical mechanisms specialize mammalian auditory brain stem neurons for ITD sensitivity, and thus, our work identifies common principles of ITD processing and neural coincidence detection across species and for sounds at widely different frequencies.
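
A toy coincidence-detection sketch (not the biophysical soma-axon model described above) showing the basic NL-style computation the paper builds on: counting near-simultaneous spikes from the two ears within a submillisecond window. The window value and spike times are illustrative assumptions.

```python
import numpy as np

def coincidence_count(left_spikes_ms, right_spikes_ms, window_ms=0.05):
    # Counts left/right spike pairs arriving within `window_ms` of each other.
    # Submillisecond windows mimic the temporal precision required for ITD
    # coding; shifting one train by an ITD changes the count, which is the
    # basis of an ITD tuning curve.
    left = np.asarray(left_spikes_ms, dtype=float)
    right = np.asarray(right_spikes_ms, dtype=float)
    diffs = np.abs(left[:, None] - right[None, :])
    return int(np.sum(np.any(diffs <= window_ms, axis=1)))

# Example with hypothetical spike times (ms) from the two ears.
n_coinc = coincidence_count([1.00, 3.20, 5.41], [1.03, 3.50, 5.44])
```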


Subjects
Sound Localization , Strigiformes , Animals , Strigiformes/physiology , Neurons/physiology , Sound Localization/physiology , Auditory Pathways/physiology , Acoustic Stimulation , Mammals